
    Matching Image Sets via Adaptive Multi Convex Hull

    Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually only once, with fixed clusters used for matching with various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared to a non-frontal face). To address the above problem, we propose a novel approach to enhance nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC) to constrain the clustering of each gallery image set by forcing the clusters to have resemblance to the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single model approaches and other recent techniques, such as Sparse Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant Analysis. Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 201
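    The matching above rests on a nearest-convex-hull distance between two image sets. As a minimal sketch (not the authors' code, and omitting the maximum margin clustering and ARC steps), the distance between the convex hulls of two sets X and Y can be found with a small quadratic program; names such as convex_hull_distance are illustrative only.

```python
# Sketch of the hull-to-hull distance underlying nearest points set matching:
# find the closest pair of points between the convex hulls of sets X and Y.
import numpy as np
from scipy.optimize import minimize

def convex_hull_distance(X, Y):
    """X: (d, m) and Y: (d, n) image sets with samples as columns."""
    m, n = X.shape[1], Y.shape[1]

    def objective(w):
        a, b = w[:m], w[m:]
        diff = X @ a - Y @ b
        return diff @ diff

    # Hull coefficients must be non-negative and sum to one for each set.
    constraints = [
        {"type": "eq", "fun": lambda w: np.sum(w[:m]) - 1.0},
        {"type": "eq", "fun": lambda w: np.sum(w[m:]) - 1.0},
    ]
    bounds = [(0.0, 1.0)] * (m + n)
    w0 = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n)])
    res = minimize(objective, w0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return float(np.sqrt(res.fun))
```

    A query set would then be assigned to the gallery identity whose (local) hull yields the smallest such distance.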

    Robust Face Recognition for Data Mining

    While the technology for mining text documents in large databases could be said to be relatively mature, the same cannot be said for mining other important data types such as speech, music, images and video. Yet these forms of multimedia data are becoming increasingly prevalent on the internet and intranets as bandwidth rapidly increases due to continuing advances in computing hardware and consumer demand. An emerging major problem is the lack of accurate and efficient tools to query these multimedia data directly, so we are usually forced to rely on available metadata such as manual labeling. Currently the most effective way to label data to allow for searching of multimedia archives is for humans to physically review the material. This is already uneconomic or, in an increasing number of application areas, quite impossible because these data are being collected much faster than any group of humans could meaningfully label them - and the pace is accelerating, forming a veritable explosion of non-text data. Some driver applications are emerging from heightened security demands in the 21st century, postproduction of digital interactive television, and the recent deployment of a planetary sensor network overlaid on the internet backbone.

    Illumination and Expression Invariant Face Recognition With One Sample Image

    Most face recognition approaches either assume constant lighting conditions or standard facial expressions, and thus cannot deal with both kinds of variations simultaneously. This problem becomes more serious in applications where only one sample image per class is available. In this paper, we present a linear pattern classification algorithm, Adaptive Principal Component Analysis (APCA), which first applies PCA to construct a subspace for image representation, then warps the subspace according to the within-class covariance and between-class covariance of samples to improve class separability. This technique performed well under variations in lighting conditions. To produce insensitivity to expressions, we rotate the subspace before warping in order to enhance the representativeness of features. This method is evaluated on the Asian Face Image Database. Experiments show that APCA outperforms PCA and other methods in terms of accuracy, robustness and generalization ability.
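    As a rough sketch of the warping idea (an assumption about the mechanism, not the published APCA algorithm, and assuming a training set with several samples per class is available to learn the warp), one can rescale each PCA direction by the ratio of between-class to within-class variance so that discriminative directions are stretched:

```python
# Illustrative warped-PCA sketch: project onto a PCA basis, then weight each
# dimension by sqrt(between-class variance / within-class variance).
import numpy as np

def pca_basis(X, k):
    """X: (n_samples, d). Returns the mean and the top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def warp_weights(Z, y, eps=1e-6):
    """Z: (n_samples, k) PCA coordinates, y: class labels."""
    classes = np.unique(y)
    centroids = np.stack([Z[y == c].mean(axis=0) for c in classes])
    within = np.mean([Z[y == c].var(axis=0) for c in classes], axis=0)
    between = centroids.var(axis=0)
    return np.sqrt((between + eps) / (within + eps))

# Usage sketch:
# mu, V = pca_basis(X_train, k=40)
# Z = (X_train - mu) @ V.T
# w = warp_weights(Z, y_train)
# features = lambda x: ((x - mu) @ V.T) * w   # warped representation
```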

    Person Location Service on the Planetary Sensor Network

    This paper gives a prototype application which can provide a person location service on the IrisNet. Two crucial technologies underpinning such an image and video data mining service, face detection and face recognition, are explained. For face detection, the authors use four types of simple rectangle features, AdaBoost as the learning algorithm to select the important features for classification, and finally generate a cascade of classifiers which is extremely fast on the face detection task. For face recognition, the authors develop Adaptive Principal Components Analysis (APCA) to improve the robustness of Principal Components Analysis (PCA) to nuisance factors such as lighting and expression. APCA can also recognize faces from a single face image, which is suitable in a data mining situation.
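    The cascade detector described above (rectangle features selected by AdaBoost, arranged as a cascade of increasingly selective stages) is the Viola-Jones approach; a similar pre-trained cascade ships with OpenCV. The sketch below only shows how such a detector is applied and is not the authors' IrisNet code; the cascade file name assumes a standard opencv-python install.

```python
# Apply a pre-trained Haar cascade face detector (Viola-Jones style).
import cv2

def detect_faces(image_path):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Early cascade stages reject most non-face windows, which is what makes
    # the detector fast enough for interactive services.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```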

    Face Recognition with One Sample Image per Class

    There are two main approaches for face recognition under variations in lighting conditions. One is to represent images with features that are insensitive to illumination in the first place. The other is to construct a linear subspace for every class under the different lighting conditions. Both of these techniques have been applied with some success in face recognition, but they are hard to extend to recognition with varying facial expressions. It is observed that features insensitive to illumination are highly sensitive to expression variations, which makes face recognition under changes in both lighting conditions and expressions a difficult task. We propose a new method called Affine Principal Components Analysis in an attempt to solve both of these problems. This method extracts features to construct a subspace for face representation and warps this space to achieve better class separation. The proposed technique is evaluated using face databases with both variable lighting and facial expressions. We achieve more than 90% accuracy for face recognition by using only one sample image per class.
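    With only one gallery image per class, recognition in such a warped subspace reduces to a nearest-prototype search; the short sketch below is illustrative only (feature extraction, e.g. a warped-PCA projection as sketched earlier, is assumed to have already been applied).

```python
# Nearest-prototype classification with exactly one gallery feature per class.
import numpy as np

def recognise(query_feat, gallery_feats, gallery_labels):
    """gallery_feats: (n_classes, k), one feature vector per enrolled person."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    return gallery_labels[int(np.argmin(dists))]
```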

    Real-Time MMX-Accelerated Image Stabilization System

    Recent advances in computer hardware and software have led to the possibility of implementing low-cost real-time computer vision systems on conventional PC hardware based upon the Intel processors supporting the Multimedia Extended instruction set (MMX). Here we describe a demonstration image stabilization system for real-time removal or minimization of unwanted scene motion due to rapid camera motion. Typical applications would be the removal of hand shake in hand-held cameras or the removal of vibration effects from a camera mounted on a moving car or boat. Unlike many commercial stabilization systems, this system suppresses the effects of both severe translation and rotation of the camera.
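    As a compact sketch of the frame-to-frame stabilisation idea described above (written against modern OpenCV rather than the MMX implementation, with illustrative function names): track corners between consecutive frames, estimate a rigid transform, and warp the current frame to cancel the camera motion, which compensates both translation and rotation.

```python
# Per-frame stabilisation: estimate and undo the global motion between frames.
import cv2

def stabilise(prev_gray, curr_gray, curr_frame):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_old = pts[status.flatten() == 1]
    good_new = moved[status.flatten() == 1]
    # Partial affine = rotation + translation (+ uniform scale) of the camera.
    M, _ = cv2.estimateAffinePartial2D(good_new, good_old)
    h, w = curr_frame.shape[:2]
    return cv2.warpAffine(curr_frame, M, (w, h))
```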

    Classification of Human Epithelial Type 2 Cell Indirect Immunofluorescence Images via Codebook Based Descriptors

    The Anti-Nuclear Antibody (ANA) clinical pathology test is commonly used to identify the existence of various diseases. A hallmark method for identifying the presence of ANAs is the Indirect Immunofluorescence method on Human Epithelial (HEp-2) cells, due to its high sensitivity and the large range of antigens that can be detected. However, the method suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems; they automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. In this paper, we propose a cell classification system composed of a dual-region codebook-based descriptor combined with the Nearest Convex Hull Classifier. We evaluate the performance of several variants of the descriptor on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the new SNPHEp-2 dataset. To our knowledge, this is the first time codebook-based descriptors have been applied and studied in this domain. Experiments show that the proposed system has consistently high performance and is more robust than two recent CAD systems.
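    A hedged sketch of the codebook ("bag of visual words") descriptor idea follows; the dual-region split and the Nearest Convex Hull Classifier are not reproduced, and the dense-patch extraction, patch size and codebook size are assumptions made purely for illustration.

```python
# Learn a k-means codebook over local patches, then describe a cell image by
# the normalised histogram of its patches' nearest codewords.
import numpy as np
from sklearn.cluster import KMeans

def dense_patches(img, size=8, step=4):
    """img: 2D grayscale array; returns flattened overlapping patches."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, step)
                     for j in range(0, w - size + 1, step)])

def train_codebook(images, n_words=64):
    feats = np.vstack([dense_patches(im) for im in images])
    return KMeans(n_clusters=n_words, n_init=10).fit(feats)

def describe(img, codebook):
    words = codebook.predict(dense_patches(img))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()
```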

    Representative feature chain for single gallery image face recognition

    Under the constraint of using only a single gallery image per person, this paper proposes a fast multi-class pattern classification approach to 2D face recognition robust to changes in pose, illumination, and expression (PIE). This work has three main contributions: (1) we propose a representative face space method to extract robust features, (2) we apply a learning method to weight features in pairs, and (3) we combine the feature pairs into a feature chain in order to find the weights for all features. The approach is evaluated for face recognition under PIE changes on three public databases. Results show that the method performs considerably better than several other appearance-based methods and can reliably recognise faces at large pose angles without the need for fragile pose estimation pre-processing. Moreover, the computational load is low (comparable to standard eigenface methods), which is a critical factor in wide-area surveillance applications.